Vision-Language Pre-Training (VLP) has shown promising capabilities to align image and text pairs, facilitating a broad variety of cross-modal learning tasks. However, we observe that VLP models often lack the visual grounding/localization capability, which is critical for many downstream tasks such as visual reasoning. In this work, we propose a novel Position-guided Text Prompt (PTP) paradigm to enhance the visual grounding ability of cross-modal models trained with VLP. Specifically, in the VLP phase, PTP divides the image into $N\times N$ blocks and identifies the objects in each block with an object detector widely used in VLP. It then reformulates the visual grounding task into a fill-in-the-blank problem given a PTP, encouraging the model to predict the objects in a given block or regress the block of a given object, e.g., filling ``P'' or ``O'' in a PTP such as ``The block P has a O''. This mechanism improves the visual grounding capability of VLP models and thus helps them better handle various downstream tasks. By introducing PTP into several state-of-the-art VLP frameworks, we observe consistent and significant improvements across representative cross-modal learning architectures and several benchmarks, e.g., zero-shot Flickr30K retrieval (+4.8 in average recall@1) for the ViLT \cite{vilt} baseline, and COCO captioning (+5.3 in CIDEr) for the SOTA BLIP \cite{blip} baseline. Moreover, PTP achieves results comparable to object-detector-based methods with much faster inference, since PTP discards its object detector at inference time while the latter cannot. Our code and pre-trained weights will be released at \url{https://github.com/sail-sg/ptp}.
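To make the mechanism concrete, here is a minimal sketch (ours, not the paper's released code) of how such position-guided prompts could be generated from detector outputs; the row-major block indexing and the prompt template are assumptions based only on the abstract's example.

```python
# Minimal sketch of position-guided text prompt (PTP) generation.
# Assumptions: detections are (x1, y1, x2, y2, label) boxes from any off-the-shelf
# detector; a block is indexed row-major over an N x N grid; the prompt template
# follows the example given in the abstract ("The block P has a O").

def box_to_block(box, img_w, img_h, n=3):
    """Map a detection box to the grid block containing its center."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    col = min(int(cx / img_w * n), n - 1)
    row = min(int(cy / img_h * n), n - 1)
    return row * n + col  # row-major block index in [0, n*n)

def build_ptp_prompts(detections, img_w, img_h, n=3):
    """Turn detections into fill-in-the-blank position-guided prompts."""
    prompts = []
    for (x1, y1, x2, y2, label) in detections:
        block = box_to_block((x1, y1, x2, y2), img_w, img_h, n)
        # During pre-training, either the block index or the object name
        # would be blanked out and predicted by the model.
        prompts.append(f"The block {block} has a {label}.")
    return prompts

# Example usage on dummy detections for a 640x480 image.
dets = [(50, 40, 200, 180, "dog"), (400, 300, 600, 460, "sofa")]
print(build_ptp_prompts(dets, img_w=640, img_h=480, n=3))
```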
This paper is about an extraordinary phenomenon. Suppose we do not use any low-light images as training data; can we enhance a low-light image by deep learning? Obviously, current methods cannot do this, since deep neural networks need to train their scads of parameters using copious amounts of training data, especially task-related data. In this paper, we show that, in the context of fundamental deep learning, it is possible to enhance a low-light image without any task-related training data. Technically, we propose a new, effective and efficient method, termed \underline{Noi}se \underline{SE}lf-\underline{R}egression (NoiSER), which learns a gray-world mapping from a Gaussian distribution for low-light image enhancement (LLIE). Specifically, a self-regression model is built as a carrier to learn a gray-world mapping during training, which is performed by simply and iteratively feeding random noise. During inference, a low-light image is directly fed into the learned mapping to yield a normal-light one. Extensive experiments show that NoiSER is highly competitive with current LLIE models trained on task-related data in terms of quantitative and visual results, while outperforming them in terms of the number of parameters, training time and inference speed. With only about 1K parameters, NoiSER needs about 1 minute for training and 1.2 ms for inference on a 600$\times$400 image on an RTX 2080 Ti. Besides, NoiSER has an inborn automated exposure-suppression capability and can automatically adjust images that are too bright or too dark, without additional manipulation.
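The abstract only states that a tiny self-regression model is trained by iteratively feeding Gaussian noise and is then applied directly to a low-light image; the exact network, loss, and any regularizers are not given. The toy below is our assumption-laden sketch, not NoiSER itself: it only illustrates the shape of that training loop, a sub-1K-parameter convolutional mapping regressed on random noise and then applied to an image tensor at inference.

```python
# Toy sketch of a noise self-regression training loop (assumption: plain MSE
# self-regression on Gaussian noise; NoiSER's actual objective and any extra
# terms are not specified in the abstract).
import torch
import torch.nn as nn

tiny_net = nn.Sequential(              # well under 1K parameters
    nn.Conv2d(3, 6, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(6, 6, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(6, 3, 3, padding=1),
)
opt = torch.optim.Adam(tiny_net.parameters(), lr=1e-3)

for step in range(1000):               # training only ever sees random noise
    noise = torch.randn(1, 3, 64, 64)
    out = tiny_net(noise)
    loss = nn.functional.mse_loss(out, noise)  # self-regression: target = input
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: feed a (normalized) low-light image directly through the mapping.
low_light = torch.rand(1, 3, 400, 600)         # placeholder image tensor
with torch.no_grad():
    enhanced = tiny_net(low_light)
```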
Low-light stereo image enhancement (LLSIE) is a relatively new task aimed at enhancing the quality of visually unpleasant stereo images captured in dark conditions. So far, deep LLSIE has been explored in very few studies and the task remains not well addressed: current methods clearly suffer from two shortcomings, 1) insufficient cross-view interaction and 2) a lack of long-range dependency modeling for intra-view learning. In this paper, we therefore propose a novel LLSIE model, termed the \underline{Suf}ficient C\underline{r}oss-View \underline{In}teraction Network (SufrinNet). To be specific, we present a sufficient inter-view interaction module (SIIM) to enhance the information exchange across views. SIIM not only discovers the cross-view correlations at different scales, but also explores cross-scale information interaction. Besides, we present a spatial-channel information mining block (SIMB) for intra-view feature extraction, whose benefits are twofold: one is long-range dependency capture to build spatial long-range relationships, and the other is expanded channel information refinement that enhances information flow in the channel dimension. Extensive experiments on the Flickr1024, KITTI 2012, KITTI 2015 and Middlebury datasets show that our method obtains better illumination adjustment and detail recovery, and achieves SOTA performance compared to other related methods. Our codes, datasets and models will be publicly available.
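The abstract does not detail SIIM or SIMB, so the snippet below is not the paper's module; it is a generic cross-view attention between left and right feature maps along the width axis (where stereo disparity lives), included only to illustrate what "cross-view interaction" typically means in stereo networks.

```python
# Generic cross-view (left <-> right) attention along the width axis -- a common
# stereo building block; NOT the paper's SIIM, whose details are not given here.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, feat_l, feat_r):
        # feat_*: (B, C, H, W). Each image row attends over the other view's row.
        b, c, h, w = feat_l.shape
        q = self.q(feat_l).permute(0, 2, 3, 1).reshape(b * h, w, c)
        k = self.k(feat_r).permute(0, 2, 3, 1).reshape(b * h, w, c)
        v = self.v(feat_r).permute(0, 2, 3, 1).reshape(b * h, w, c)
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)  # (B*H, W, W)
        fused = (attn @ v).reshape(b, h, w, c).permute(0, 3, 1, 2)      # (B, C, H, W)
        return feat_l + fused  # left features enriched with right-view information

left = torch.randn(1, 16, 32, 48)
right = torch.randn(1, 16, 32, 48)
print(CrossViewAttention(16)(left, right).shape)  # torch.Size([1, 16, 32, 48])
```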
MetaFormer, the abstracted architecture of Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer, again without focusing on token-mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and summarize our observations as follows. (1) MetaFormer ensures a solid lower bound of performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works well with arbitrary token mixers. Even when specifying the token mixer as a random matrix to mix tokens, the resulting model, RandFormer, yields an accuracy of >81%, outperforming IdentityFormer. One can thus rest assured of MetaFormer's results when new token mixers are adopted. (3) MetaFormer effortlessly offers state-of-the-art results. With just conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking the common depthwise separable convolution as the token mixer, the model termed ConvFormer, which can be regarded as a pure CNN, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model, CAFormer, sets a new record on ImageNet-1K: it achieves an accuracy of 85.5% at 224x224 resolution under normal supervised training, without external data or distillation. In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces the FLOPs of the activation by 71% compared with GELU, yet achieves better performance. We expect StarReLU to find great potential in MetaFormer-like models alongside other neural networks.
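The StarReLU mentioned at the end is simple enough to spell out: it squares the ReLU output and applies a learnable scale and bias, StarReLU(x) = s * ReLU(x)^2 + b. A minimal PyTorch module follows; the initialization constants here are generic placeholders, not the paper's variance-normalizing values.

```python
# StarReLU: s * relu(x)^2 + b. The scale/bias initialization below is a generic
# placeholder; the paper derives specific constants so that the output has unit
# variance under a standard-normal input assumption.
import torch
import torch.nn as nn

class StarReLU(nn.Module):
    def __init__(self, scale_init=1.0, bias_init=0.0, learnable=True):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(scale_init), requires_grad=learnable)
        self.bias = nn.Parameter(torch.tensor(bias_init), requires_grad=learnable)

    def forward(self, x):
        return self.scale * torch.relu(x) ** 2 + self.bias

x = torch.randn(4, 8)
print(StarReLU()(x).shape)  # torch.Size([4, 8])
```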
Adaptive gradient algorithms borrow the moving-average idea of heavy-ball acceleration to estimate accurate first- and second-order moments of the gradient for accelerating convergence. However, Nesterov acceleration, which converges faster than heavy-ball acceleration in theory and in many empirical cases, is much less investigated under the adaptive-gradient setting. In this work, we propose the ADAptive Nesterov momentum algorithm, Adan for short, to effectively speed up the training of deep neural networks. Adan first reformulates the vanilla Nesterov acceleration to develop a new Nesterov momentum estimation (NME) method, which avoids the extra computation and memory overhead of computing the gradient at the extrapolation point. Then Adan adopts NME to estimate the first- and second-order moments of the gradient in adaptive gradient algorithms for convergence acceleration. Moreover, we prove that Adan finds an $\epsilon$-approximate first-order stationary point within $O(\epsilon^{-3.5})$ stochastic gradient complexity, matching the best-known lower bound. Extensive experimental results show that Adan surpasses the corresponding SOTA optimizers on both vision transformers (ViTs) and CNNs, and sets new SOTA results for many popular networks, e.g., ResNet, ConvNeXt, ViT, Swin, MAE, LSTM, Transformer-XL and BERT. More surprisingly, Adan can use half of the training cost (epochs) of SOTA optimizers to achieve higher or comparable performance on ViT, ResNet, etc., and also shows great tolerance to a large range of minibatch sizes, e.g., from 1K to 32K. We hope Adan can contribute to the development of deep learning by reducing the training cost and relieving the engineering burden of trying different optimizers on various architectures. The code will be released at https://github.com/sail-sg/adan.
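For readers who want the update rule rather than the prose, the following is our simplified transcription of Adan's update as we understand it; the exact handling of bias correction, restarts and decoupled weight decay in the released code may differ, so treat it as a sketch rather than the authoritative algorithm.

$$
\begin{aligned}
m_k &= (1-\beta_1)\, m_{k-1} + \beta_1\, g_k \\
v_k &= (1-\beta_2)\, v_{k-1} + \beta_2\, (g_k - g_{k-1}) \\
n_k &= (1-\beta_3)\, n_{k-1} + \beta_3\, \bigl[g_k + (1-\beta_2)(g_k - g_{k-1})\bigr]^2 \\
\eta_k &= \eta \big/ \bigl(\sqrt{n_k} + \varepsilon\bigr) \\
\theta_{k+1} &= (1 + \lambda\eta)^{-1}\bigl[\theta_k - \eta_k \odot \bigl(m_k + (1-\beta_2)\, v_k\bigr)\bigr]
\end{aligned}
$$

Here $g_k$ is the stochastic gradient at step $k$, and the gradient-difference terms realize the Nesterov momentum estimation without evaluating the gradient at an extrapolation point.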
This paper proposes a Video Graph Transformer (VGT) model for Video Question Answering (VideoQA). VGT's uniqueness is twofold: 1) it designs a dynamic graph transformer module that encodes video by explicitly capturing visual objects, their relations and dynamics for complex spatio-temporal reasoning; and 2) it exploits disentangled video and text Transformers to compare video and text for QA, instead of an entangled cross-modal Transformer for answer classification. Vision-text communication is done by additional cross-modal interaction modules. With a more reasonable video encoding and QA solution, we show that VGT achieves much better performance than prior arts on VideoQA tasks that challenge dynamic relation reasoning, in the pretraining-free setting. Its performance even surpasses models pretrained with millions of external data. We further show that VGT can also benefit greatly from self-supervised cross-modal pretraining, yet with orders of magnitude less data. These results clearly demonstrate the effectiveness and superiority of VGT, and reveal its potential for more data-efficient pretraining. With comprehensive analyses and some heuristic observations, we hope VGT can promote VQA research beyond coarse recognition/description towards fine-grained relation reasoning in realistic videos. Our code is available at https://github.com/sail-sg/vgt.
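The snippet below is a toy illustration (ours, with dummy linear "encoders") of the second point only: scoring a question against candidate answers by comparing disentangled video and text embeddings, rather than classifying over answers with an entangled cross-modal transformer. It is not VGT's architecture.

```python
# Toy relevance-comparison QA with disentangled encoders (dummy stand-ins;
# NOT the VGT architecture, which is not specified in the abstract).
import torch
import torch.nn as nn

video_encoder = nn.Linear(2048, 256)   # stands in for the dynamic graph transformer
text_encoder = nn.Linear(768, 256)     # stands in for the text transformer

video_feat = torch.randn(1, 2048)      # pooled video representation
qa_feats = torch.randn(5, 768)         # question paired with 5 candidate answers

v = nn.functional.normalize(video_encoder(video_feat), dim=-1)   # (1, 256)
t = nn.functional.normalize(text_encoder(qa_feats), dim=-1)      # (5, 256)

scores = (t @ v.t()).squeeze(-1)   # cosine relevance of each candidate to the video
answer = scores.argmax().item()    # pick the most relevant question-answer pair
print(answer)
```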
Deep reinforcement learning (RL) algorithms suffer severe performance degradation when interaction data is scarce, which limits their real-world applications. Recently, visual representation learning has been shown to be effective and promising for improving RL sample efficiency. These methods usually rely on contrastive learning and data augmentation to train a transition model for state prediction, which differs from how the model is used in RL -- value-based planning. Accordingly, the learned model may not align well with the environment and may not produce consistent value predictions, especially when the state transition is not deterministic. To address this issue, we propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making. More specifically, VCR trains a model to predict the future state (also referred to as the "imagined state") based on the current one and a sequence of actions. Instead of aligning this imagined state with the real state returned by the environment, VCR applies a $Q$-value head on both states and obtains two distributions of action values. A distance is then computed and minimized to force the imagined state to produce action-value predictions similar to those of the real state. We develop two implementations of this idea for discrete and continuous action spaces, respectively. We conduct experiments on the Atari 100K and DeepMind Control Suite benchmarks to validate their effectiveness in improving sample efficiency. Our methods achieve new state-of-the-art performance among search-free RL algorithms.
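To make the core objective concrete, here is a small sketch (our reading of the abstract, not the authors' code) of a value-consistency loss for a discrete action space: a shared Q-head is applied to the imagined and the real state, and the distance between the two action-value vectors is minimized.

```python
# Sketch of a value-consistency loss for discrete actions (our reading of the
# abstract; the paper's exact distance and training details may differ).
import torch
import torch.nn as nn

q_head = nn.Linear(128, 6)   # shared Q-value head over 6 discrete actions

def value_consistency_loss(imagined_state, real_state):
    """Force the imagined state to yield action values similar to the real one's."""
    q_imagined = q_head(imagined_state)       # (B, num_actions)
    with torch.no_grad():
        q_real = q_head(real_state)           # target values, no gradient
    return nn.functional.mse_loss(q_imagined, q_real)

# Dummy usage: states produced by a transition model vs. returned by the env.
loss = value_consistency_loss(torch.randn(8, 128), torch.randn(8, 128))
loss.backward()
```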
There has been significant progress in developing reinforcement learning (RL) training systems. Past works such as IMPALA, Apex, SEED RL and Sample Factory aim to improve the overall throughput of the system. In this paper, we address a common bottleneck in RL training systems, namely parallel environment execution, which is often the slowest part of the whole system yet receives little attention. With a curated design dedicated to RL environments, we improve RL environment simulation speed across different hardware setups, ranging from laptops and modest workstations to high-end machines such as the NVIDIA DGX-A100. On a high-end machine, EnvPool achieves one million frames per second of environment execution on Atari environments and three million frames per second on MuJoCo environments. When running on a laptop, EnvPool is 2.8 times faster than the Python subprocess baseline. Moreover, great compatibility with existing RL training libraries has been demonstrated in the open-source community, including CleanRL, rl_games, DeepMind Acme, etc. Finally, EnvPool allows researchers to iterate on their ideas at a much faster pace and has great potential to become the de facto RL environment execution engine. Example runs show that it takes only five minutes to train Atari Pong and MuJoCo Ant on a laptop. EnvPool has been open-sourced at https://github.com/sail-sg/envpool.
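For context, a typical batched-environment loop with EnvPool looks roughly like the sketch below; the call signatures follow EnvPool's gym-style interface as we recall it, so check the repository documentation for the current API (e.g., newer gym versions change the shape of `reset`/`step` returns).

```python
# Rough EnvPool usage sketch (gym-style batched API; verify the exact signatures
# against the EnvPool docs, as the interface has evolved across gym versions).
import numpy as np
import envpool

env = envpool.make("Pong-v5", env_type="gym", num_envs=16)
obs = env.reset()                                   # batched observations: (16, ...)
for _ in range(100):
    actions = np.random.randint(0, env.action_space.n, size=16)
    obs, rewards, dones, info = env.step(actions)   # all arrays batched over 16 envs
```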
For unsupervised pretraining, mask-reconstruction pretraining (MRP) approaches randomly mask input patches and then reconstruct the pixels or semantic features of the masked patches via an auto-encoder. For downstream tasks, fine-tuning the pretrained encoder remarkably surpasses conventional supervised learning (SL) trained from scratch. However, it remains unclear 1) how MRP performs semantic learning in the pretraining phase and 2) why it helps downstream tasks. To answer these questions, we theoretically show that, on an auto-encoder with a two/one-layer convolutional encoder/decoder, MRP can capture all discriminative semantics in the pretraining dataset, and accordingly show its provable improvement over SL on downstream tasks. Specifically, we assume that the pretraining dataset contains multi-view samples of ratio $1-\mu$ and single-view samples of ratio $\mu$, where multi-/single-view samples possess multiple/single discriminative semantics. For pretraining, we prove that 1) the convolution kernels of the MRP encoder capture all discriminative semantics in the pretraining data; and 2) each convolution kernel captures at most one semantic. Accordingly, in downstream supervised fine-tuning, most semantics are captured and different semantics are not fused together, which helps the downstream fine-tuned network easily establish the relation between kernels and semantic class labels. In this way, the fine-tuned encoder in MRP provably achieves zero test error with high probability on both multi-view and single-view test data. In contrast, as proved by~[3], conventional SL can only achieve a test accuracy of around $0.5\mu$ on single-view test data. These results together explain the benefits of MRP on downstream tasks. Experimental results testify to the multi-view data assumptions and our theoretical implications.
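As a reminder of the procedure being analyzed, the following is a generic masked-reconstruction step (random patch masking plus a reconstruction loss on the masked patches only); it illustrates MRP in the MAE style referenced by the abstract, not the specific two/one-layer convolutional setting the theory studies.

```python
# Generic mask-reconstruction pretraining step: mask random patches, reconstruct
# them with an auto-encoder, and compute the loss on masked patches only.
# (Illustrative; not the specific architecture analyzed in the paper.)
import torch
import torch.nn as nn

patches = torch.randn(4, 196, 768)                 # (batch, num_patches, dim)
mask = torch.rand(4, 196) < 0.75                   # mask ~75% of the patches

autoencoder = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 768))

visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)   # zero out masked patches
recon = autoencoder(visible)
loss = ((recon - patches) ** 2)[mask].mean()             # loss on masked patches only
loss.backward()
```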
The fermionic neural network (FermiNet) is a recently proposed wavefunction Ansatz used in variational Monte Carlo (VMC) methods to solve the many-electron Schr\"{o}dinger equation, where a Slater determinant is applied to induce antisymmetry. FermiNet is proved to have universal approximation capability with a single determinant, i.e., it can represent any antisymmetric function given sufficient parameters. However, the asymptotic computational bottleneck comes from the Slater determinant, which scales as $O(N^3)$ for $N$ electrons. In this paper, we replace the Slater determinant with a pairwise antisymmetric construction, which is easy to implement and reduces the computational cost to $O(N^2)$. We formally prove that the pairwise construction built upon permutation-equivariant architectures can universally represent any antisymmetric function. Moreover, this universality can still be achieved via continuous approximators when we aim to represent ground-state wavefunctions.
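To see why a pairwise construction can be antisymmetric at $O(N^2)$ cost, consider the classic product-of-pairwise-differences form below; swapping any two electrons flips the overall sign, and the number of factors is quadratic in $N$. This is only a textbook illustration of the flavor of the idea, not the ansatz proposed in the paper.

```python
# Textbook pairwise antisymmetric construction: prod_{i<j} (phi(x_j) - phi(x_i)).
# Swapping two inputs flips the sign; the number of factors is O(N^2).
# (Illustration only; not the paper's ansatz.)
import numpy as np

def pairwise_antisymmetric(x, phi=np.tanh):
    vals = phi(x)                       # per-electron scalar feature
    n = len(vals)
    out = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            out *= vals[j] - vals[i]
    return out

x = np.array([0.3, -1.2, 0.7, 2.1])
swapped = x.copy()
swapped[[0, 2]] = swapped[[2, 0]]       # exchange two "electrons"
print(pairwise_antisymmetric(x), pairwise_antisymmetric(swapped))  # equal magnitude, opposite sign
```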